How Good Are We? Evaluating Cell AI Foundation Models in Kidney Pathology with Human-in-the-Loop Enrichment
Guo, Junlin, Lu, Siqi, Cui, Can, Deng, Ruining, Yao, Tianyuan, Tao, Zhewen, Lin, Yizhe, Lionts, Marilyn, Liu, Quan, Xiong, Juming, Wang, Yu, Zhao, Shilin, Chang, Catie, Wilkes, Mitchell, Yin, Mengmeng, Yang, Haichun, Huo, Yuankai
Training AI foundation models has emerged as a promising large-scale learning approach for addressing real-world healthcare challenges, including digital pathology. While many of these models have been developed for tasks such as disease diagnosis and tissue quantification using extensive and diverse training datasets, their readiness for deployment on some of the arguably simplest tasks, such as nuclei segmentation within a single organ (e.g., the kidney), remains uncertain. This paper seeks to answer the key question, "How good are we?", by thoroughly evaluating the performance of recent cell foundation models on a curated multi-center, multi-disease, and multi-species external testing dataset. Additionally, we tackle a more challenging question, "How can we improve?", by developing and assessing human-in-the-loop data enrichment strategies aimed at enhancing model performance while minimizing reliance on pixel-level human annotation. To address the first question, we curated a multi-center, multi-disease, and multi-species dataset consisting of 2,542 kidney whole slide images (WSIs). Three state-of-the-art (SOTA) cell foundation models (Cellpose, StarDist, and CellViT) were selected for evaluation. To tackle the second question, we explored data enrichment algorithms that distill predictions from the different foundation models within a human-in-the-loop framework, aiming to further enhance foundation model performance with minimal human effort. Our experimental results showed that all three foundation models improved over their baselines after fine-tuning with the enriched data. Interestingly, the baseline model with the highest F1 score did not yield the best segmentation outcomes after fine-tuning. This study establishes a benchmark for the development and deployment of cell vision foundation models tailored to real-world data applications.
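The human-in-the-loop enrichment idea (distilling predictions from several foundation models and reserving human annotation effort for ambiguous patches) can be sketched roughly as below. This is a minimal illustration, not the paper's actual algorithm: the function name `consensus_pseudo_labels`, the majority-vote fusion, and the 0.9 agreement threshold are all assumptions made for the example.

```python
import numpy as np

def consensus_pseudo_labels(pred_masks, agree_thresh=0.9):
    """Fuse binary nuclei masks predicted by several models.

    pred_masks: list of (H, W) binary arrays, one per model
    (hypothetical stand-ins for Cellpose/StarDist/CellViT outputs).
    Returns a majority-vote pseudo-label mask and a flag saying
    whether the patch should be routed to a human annotator
    because inter-model agreement is low.
    """
    stack = np.stack(pred_masks).astype(float)   # (M, H, W)
    vote = stack.mean(axis=0)                    # per-pixel vote fraction
    pseudo = (vote > 0.5).astype(np.uint8)       # majority vote
    # Agreement = fraction of pixels where all models concur exactly.
    agreement = float(np.mean((vote == 0) | (vote == 1)))
    needs_review = agreement < agree_thresh
    return pseudo, needs_review
```

Patches where the models already agree become training pseudo-labels for free; only the disputed minority consumes human effort, which is the point of minimizing pixel-level annotation.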
AKGNet: Attribute Knowledge-Guided Unsupervised Lung-Infected Area Segmentation
Lung-infected area segmentation is crucial for assessing the severity of lung diseases. However, existing image-text multi-modal methods typically rely on labour-intensive annotations for model training, posing challenges regarding time and expertise. To address this issue, we propose a novel attribute knowledge-guided framework for unsupervised lung-infected area segmentation (AKGNet), which achieves segmentation solely based on image-text data without any mask annotation. AKGNet facilitates text attribute knowledge learning, attribute-image cross-attention fusion, and high-confidence-based pseudo-label exploration simultaneously. It can learn statistical information and capture spatial correlations between image and text attributes in the embedding space, iteratively refining the mask to enhance segmentation. Specifically, we introduce a text attribute knowledge learning module by extracting attribute knowledge and incorporating it into feature representations, enabling the model to learn statistical information and adapt to different attributes. Moreover, we devise an attribute-image cross-attention module by calculating the correlation between attributes and images in the embedding space to capture spatial dependency information, thus selectively focusing on relevant regions while filtering irrelevant areas. Finally, a self-training mask improvement process is employed by generating pseudo-labels using high-confidence predictions to iteratively enhance the mask and segmentation. Experimental results on a benchmark medical image dataset demonstrate the superior performance of our method compared to state-of-the-art segmentation techniques in unsupervised scenarios.
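The attribute-image cross-attention module described above can be illustrated with a minimal scaled dot-product sketch, in which text-attribute embeddings act as queries over flattened image-patch features. The function name and tensor shapes here are assumptions for illustration, not AKGNet's actual implementation.

```python
import numpy as np

def cross_attention(attr_emb, img_feats):
    """Attribute-to-image cross-attention (illustrative sketch).

    attr_emb:  (A, d) text-attribute embeddings (queries).
    img_feats: (N, d) flattened image-patch features (keys/values).
    Returns (A, d) attribute-conditioned features and the (A, N)
    attention map that weights image regions by relevance to each
    attribute, so irrelevant areas receive near-zero weight.
    """
    d = attr_emb.shape[1]
    scores = attr_emb @ img_feats.T / np.sqrt(d)    # (A, N) similarity
    scores -= scores.max(axis=1, keepdims=True)     # numeric stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)         # softmax over patches
    return attn @ img_feats, attn
```

The attention map itself is the useful by-product here: thresholding its high-confidence rows is one natural way to seed the pseudo-labels used in the self-training stage.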
How AI is Helping Vets to Help our Pets
Pets today have a better chance of being successfully treated than ever, thanks to advances in early recognition, diagnosis and treatment. "This is one of the biggest challenges in veterinary pathology. Do you think you can solve it?" Pathologists Dr. Edwards and Dr. Whitley asked in our first meeting. It was December 2018, and our team, Next Generation Technologies, had been founded that year to solve some of the most complex challenges at Mars through technology.
DeepGlobe Road Extraction -- Challenge
The Geoscience and Remote Sensing Society, a well-known community for learning about and contributing to geospatial science, sponsored the DeepGlobe machine vision challenge in 2018, which involves deep analysis of satellite images of Earth. As part of this, I picked up the problem of Road Extraction, since roads are crucial in many contexts: transportation, traffic management, city planning, road monitoring, GPS navigation, and more. The DeepGlobe challenges are purely research-based and focus on real problems. Given a satellite image, the task is to predict a per-pixel road mask. One caveat is that the evaluation metric requires the prediction and the ground truth to share the same set of classes.
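The class-set caveat about the metric can be made concrete with a per-class intersection-over-union sketch, a standard segmentation measure. The `mean_iou` function below is an illustrative assumption, not necessarily the challenge's official scoring code: it only averages cleanly when prediction and ground truth are labeled over the same `num_classes` classes.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union for a segmentation map.

    pred, target: integer label maps of identical shape, both drawn
    from the same set of `num_classes` class IDs. The metric is only
    comparable across models when that class list is identical.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                 # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

For binary road extraction, `num_classes` is 2 (road vs. background), and a perfect prediction scores 1.0.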